future tasks should not cause anything to happen if the instance is suspended #952
Conversation
…ance is suspended" This reverts commit 05b50df.
Warning: Rate Limit Exceeded

@jasquat has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 28 minutes and 7 seconds before requesting another review.

How to resolve this issue? After the wait time has elapsed, a review can be triggered using the We recommend that you space out your commits to avoid hitting the rate limit.

How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization.

Walkthrough

The recent updates to the SpiffWorkflow backend aim to optimize task management and improve efficiency. Changes include refactoring task processing methods, introducing new task handling capabilities, enhancing task origin tracking, and improving task queuing and archiving based on process instance statuses. These modifications are geared towards enhancing task management processes and boosting system reliability.
Review Status
Actionable comments generated: 2
Configuration used: .coderabbit.yaml
Files ignored due to path filters (1)
- spiffworkflow-backend/migrations/versions/acf20342181e_.py is excluded by: !spiffworkflow-backend/migrations/**
Files selected for processing (7)
- spiffworkflow-backend/src/spiffworkflow_backend/background_processing/background_processing_service.py (2 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/background_processing/celery_tasks/process_instance_task_producer.py (1 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/models/future_task.py (1 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/models/process_instance.py (4 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/services/process_instance_processor.py (3 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/services/process_instance_service.py (1 hunks)
- spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_background_processing_service.py (1 hunks)
Additional comments: 7
spiffworkflow-backend/src/spiffworkflow_backend/background_processing/celery_tasks/process_instance_task_producer.py (1)
- 19-24: The addition of `"producer_identifier": "future_task"` to `args_to_celery` is a good enhancement for tracking the origin of tasks in the queue. This change should improve the system's ability to trace task execution paths and diagnose issues related to task processing.

spiffworkflow-backend/src/spiffworkflow_backend/models/future_task.py (1)
- 20-20: The addition of the `archived_for_process_instance_status` boolean column to the `FutureTaskModel` is a significant change that aligns with the PR's objective to manage future tasks in suspended process instances effectively. This field will enable the system to archive future tasks appropriately and ensure they are only processed when their corresponding process instances are active.

spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_background_processing_service.py (1)
- 13-66: The tests added for `BackgroundProcessingService` cover scenarios for processing future tasks with and without processable future tasks, which is crucial for validating the new archiving mechanism and the processing logic for future tasks. These tests ensure that the system behaves as expected when future tasks are processed or archived based on the status of their corresponding process instances. It's good to see comprehensive test coverage for these new functionalities.

spiffworkflow-backend/src/spiffworkflow_backend/background_processing/background_processing_service.py (1)
- 61-103: Refactoring `process_future_tasks` into a class method `do_process_future_tasks` and adding `imminent_future_tasks` for handling future tasks more efficiently is a positive change. This refactoring improves the modularity and readability of the code. However, ensure that the logic for archiving future tasks (`future_task.archived_for_process_instance_status = True`) and the subsequent database commit is thoroughly tested, especially in high-concurrency environments, to avoid race conditions or inconsistencies.

spiffworkflow-backend/src/spiffworkflow_backend/models/process_instance.py (2)
- 125-133: The `future_tasks_query` method is a valuable addition to the `ProcessInstanceModel`, providing a straightforward way to retrieve future tasks related to a specific process instance. This method enhances the model's capabilities and aligns with the PR's objectives to manage future tasks effectively.
- 170-172: The `allowed_to_run` method is a simple yet effective way to determine whether a process instance can proceed based on its status. This method is crucial for the new logic introduced in the PR, ensuring that tasks are processed only when appropriate.

spiffworkflow-backend/src/spiffworkflow_backend/services/process_instance_service.py (1)
- 45-45: The addition of the `TaskModel` import statement is necessary for the new functionalities introduced in the PR. It's good to see that dependencies are being managed correctly, ensuring that the new code has access to the required models.
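To make the status-based gate described above concrete, here is a minimal sketch of an `allowed_to_run`-style check. The status names and class below are illustrative assumptions, not the actual spiffworkflow-backend implementation.

```python
# Statuses in which an instance may keep executing tasks (an assumed set,
# not necessarily the real ProcessInstanceStatus values).
ACTIVE_STATUSES = {"not_started", "user_input_required", "waiting", "running"}


class ProcessInstanceSketch:
    """Hypothetical stand-in for a process instance model."""

    def __init__(self, status: str) -> None:
        self.status = status

    def allowed_to_run(self) -> bool:
        # A suspended (or terminated) instance should not cause anything to happen.
        return self.status in ACTIVE_STATUSES
```

With this gate, `ProcessInstanceSketch("suspended").allowed_to_run()` is false, so background processing can skip or archive the instance's future tasks.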
```python
def bring_archived_future_tasks_back_to_life(self) -> None:
    archived_future_tasks = (
        self.process_instance_model.future_tasks_query()
        .filter(FutureTaskModel.archived_for_process_instance_status == True)  # noqa: E712
        .all()
    )
    for archived_future_task in archived_future_tasks:
        archived_future_task.archived_for_process_instance_status = False
        db.session.add(archived_future_task)
```
The method `bring_archived_future_tasks_back_to_life` correctly updates the `archived_for_process_instance_status` flag for archived future tasks. However, it's important to commit the session changes within the method to ensure data consistency and handle potential exceptions gracefully.

```diff
+         db.session.commit()
```
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

```python
def bring_archived_future_tasks_back_to_life(self) -> None:
    archived_future_tasks = (
        self.process_instance_model.future_tasks_query()
        .filter(FutureTaskModel.archived_for_process_instance_status == True)  # noqa: E712
        .all()
    )
    for archived_future_task in archived_future_tasks:
        archived_future_task.archived_for_process_instance_status = False
        db.session.add(archived_future_task)
    db.session.commit()
```
we don't want this logical transaction to partially succeed. the commit at the end of the resume method is when the logical transaction is complete and when we want to commit.
@burnettk, understood. It makes sense to manage the transaction commit at the end of the resume method to ensure the entire logical transaction completes successfully as a whole. Thank you for clarifying your approach to transaction management.
```python
def resume(self) -> None:
    self.process_instance_model.status = ProcessInstanceStatus.waiting.value
    db.session.add(self.process_instance_model)
    self.bring_archived_future_tasks_back_to_life()
```
The `resume` method correctly calls `bring_archived_future_tasks_back_to_life` to unarchive future tasks. However, it's recommended to handle potential exceptions and to commit the database session only if all operations succeed, to maintain data integrity.

Consider wrapping the operations in a try-except block and committing the session only when no exceptions occur.
since it is committing at the end of the resume method, it should be logically equivalent to what you're suggesting. it will only commit if all operations succeed.
@burnettk You're absolutely right. The commit at the end of the `resume` method does ensure that all operations within the method must succeed for the changes to be persisted, effectively maintaining data integrity as you've described. This approach makes an explicit try-except block for transaction management unnecessary in this context. Thank you for pointing this out and clarifying.
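The transaction pattern agreed on in this thread, staging every change and committing exactly once at the end of the logical operation, can be sketched as follows. The session class here is a deliberately simplified stand-in for a SQLAlchemy session, and all names are illustrative rather than the actual spiffworkflow-backend code.

```python
class FakeSession:
    """Minimal session stub: staged changes persist only when commit() runs."""

    def __init__(self) -> None:
        self.pending: list = []
        self.committed: list = []

    def add(self, obj) -> None:
        self.pending.append(obj)

    def commit(self) -> None:
        self.committed.extend(self.pending)
        self.pending.clear()


class FutureTask:
    def __init__(self) -> None:
        self.archived_for_process_instance_status = True


def resume(instance: dict, future_tasks: list, session: FakeSession) -> None:
    # All mutations are staged first ...
    instance["status"] = "waiting"
    for task in future_tasks:
        task.archived_for_process_instance_status = False
        session.add(task)
    # ... and committed exactly once, so the logical transaction cannot
    # partially succeed: an exception above leaves nothing committed.
    session.commit()
```

Because `commit()` is the last statement, any failure while un-archiving tasks leaves the whole operation unpersisted, which is the behavior the reviewers converged on.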
Review Status
Actionable comments generated: 0
Configuration used: .coderabbit.yaml
Files selected for processing (1)
- spiffworkflow-backend/src/spiffworkflow_backend/models/process_instance.py (5 hunks)
Files skipped from review as they are similar to previous changes (1)
- spiffworkflow-backend/src/spiffworkflow_backend/models/process_instance.py
```yaml
# the repo requires one approval. if a dependabot change passes tests, that is good enough.
- name: approve pr
  run: gh pr review --approve "$(cat pr_number)"
  env:
    GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
# if the merge --auto flag were added, and if the repo allowed it at https://github.com/sartography/spiff-arena/settings,
# it would set up the pr to auto merge when all requirements were met. but we just want to merge now.
- name: set up pr to auto merge when all requirements are met
  run: gh pr merge --squash "$(cat pr_number)"
```
The command `unzip pr_number.zip` lacks a `-d` option to specify the extraction directory, which could lead to unexpected behavior or file placement. Consider specifying a target directory for clarity and to ensure files are extracted to a known location.

```diff
- run: unzip pr_number.zip
+ run: unzip pr_number.zip -d ${{ github.workspace }}
```
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

```yaml
# the repo requires one approval. if a dependabot change passes tests, that is good enough.
- name: approve pr
  run: gh pr review --approve "$(cat pr_number)"
  env:
    GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
# if the merge --auto flag were added, and if the repo allowed it at https://github.com/sartography/spiff-arena/settings,
# it would set up the pr to auto merge when all requirements were met. but we just want to merge now.
- name: set up pr to auto merge when all requirements are met
  run: gh pr merge --squash "$(cat pr_number)"
```
```yaml
- name: set up pr to auto merge when all requirements are met
  run: gh pr merge --squash "$(cat pr_number)"
```
The `gh pr merge --squash "$(cat pr_number)"` command is intended to set up the PR for auto-merge but actually performs a squash and merge immediately. If the intention is to enable auto-merge, the command should use the `--auto` flag instead of performing an immediate merge.

```diff
- run: gh pr merge --squash "$(cat pr_number)"
+ run: gh pr merge --auto "$(cat pr_number)"
```
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

```yaml
- name: set up pr to auto merge when all requirements are met
  run: gh pr merge --auto "$(cat pr_number)"
```
Review Status
Actionable comments generated: 0
Configuration used: .coderabbit.yaml
Files selected for processing (3)
- spiffworkflow-backend/src/spiffworkflow_backend/models/process_instance.py (6 hunks)
- spiffworkflow-backend/src/spiffworkflow_backend/services/process_instance_processor.py (2 hunks)
- spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_background_processing_service.py (1 hunks)
Files skipped from review as they are similar to previous changes (3)
- spiffworkflow-backend/src/spiffworkflow_backend/models/process_instance.py
- spiffworkflow-backend/src/spiffworkflow_backend/services/process_instance_processor.py
- spiffworkflow-backend/tests/spiffworkflow_backend/unit/test_background_processing_service.py
… added to the table schema w/ burnettk
Review Status
Actionable comments generated: 0
Configuration used: .coderabbit.yaml
Files ignored due to path filters (1)
- spiffworkflow-backend/migrations/versions/29b261f5edf4_.py is excluded by: !spiffworkflow-backend/migrations/**
Files selected for processing (1)
- spiffworkflow-backend/src/spiffworkflow_backend/models/future_task.py (2 hunks)
Files skipped from review as they are similar to previous changes (1)
- spiffworkflow-backend/src/spiffworkflow_backend/models/future_task.py
- add column for archived future tasks, for use when instances are suspended
- set archived back to false when the instance resumes
- add some tests for related background processing service functionality
Summary by CodeRabbit
- Added the `archived_for_process_instance_status` column to the `FutureTaskModel` class.
- Added a `"producer_identifier"` entry to the `args_to_celery` dictionary.
- Updated the `resume` method in the `ProcessInstanceProcessor` to handle archived future tasks.
- Added tests for the `BackgroundProcessingService` class to cover future task processing scenarios.
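The overall behavior this PR introduces, archiving future tasks instead of queuing them when their instance is suspended, and un-archiving them on resume, can be sketched end to end. Everything below is a simplified illustration with assumed names and statuses, not the real spiffworkflow-backend models.

```python
from dataclasses import dataclass


@dataclass
class FutureTaskSketch:
    """Hypothetical future task tied to one process instance."""
    instance_status: str
    archived_for_process_instance_status: bool = False
    queued: bool = False


def process_future_tasks(tasks: list) -> None:
    # Background pass: queue tasks for runnable instances, archive the rest
    # so a suspended instance does not cause anything to happen.
    for task in tasks:
        if task.instance_status == "suspended":
            task.archived_for_process_instance_status = True
        else:
            task.queued = True


def resume_tasks(tasks: list) -> None:
    # Resuming an instance brings its archived future tasks back to life.
    for task in tasks:
        task.instance_status = "waiting"
        task.archived_for_process_instance_status = False
```

Running `process_future_tasks` on a suspended instance's task only sets the archive flag, and a later `resume_tasks` clears it so the next background pass can queue the task normally.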